
Conversation

ankur-arch
Contributor

@ankur-arch ankur-arch commented Aug 11, 2025

Summary by CodeRabbit

  • New Features

    • Rate limiting is now enforced per client (by agent/IP) for consistent throttling across environments.
    • Request allowance reduced to 10 requests per 60 seconds on create/claim endpoints.
    • Rate-limit responses now return a structured JSON error including client IP and trace identifier.
  • Refactor

    • Rate-limiter key updated to use client identity (no public API changes).
  • Tests

    • Rate-limit test script accepts agent_id, sends identifying headers, and defaults to hosted endpoints.


coderabbitai bot commented Aug 11, 2025

Note

Currently processing new changes in this PR. This may take a few minutes, please wait...

📥 Commits

Reviewing files that changed from the base of the PR and between 200574e and 3b51143.

📒 Files selected for processing (4)
  • claim-db-worker/src/index.ts (1 hunks)
  • create-db-worker/src/index.ts (2 hunks)
  • package.json (1 hunks)
  • tests/test-rate-limits.sh (4 hunks)
 ___________________________
< Goodbye, overlooked bugs. >
 ---------------------------
  \
   \   (\__/)
       (•ㅅ•)
       /   づ

Walkthrough

Workers now rate-limit by client IP (from x-agent or cf-connecting-ip) and include rayId in error handling; wrangler limits reduced from 100 to 10; the test script adds agent_id headers, updates default endpoints, captures responses, and removes the per-request sleep.
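The header-precedence logic described above (take x-agent first, otherwise fall back to cf-connecting-ip) can be illustrated with a small shell sketch; pick_key and the sample values are hypothetical stand-ins, not the workers' actual code:

```shell
#!/bin/sh
# Sketch of the rate-limit key precedence from the walkthrough:
# use the x-agent header value if present, else the cf-connecting-ip value.
pick_key() {
  x_agent="$1"
  cf_ip="$2"
  if [ -n "$x_agent" ]; then
    printf '%s\n' "$x_agent"
  else
    printf '%s\n' "$cf_ip"
  fi
}

pick_key "meow" "203.0.113.7"   # header set: key is the agent id
pick_key ""     "203.0.113.7"   # header absent: key falls back to the IP
```

Note that later review comments argue this precedence should be reversed, since a client-supplied header placed first makes the key trivially rotatable.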

Changes

Cohort / File(s) Summary of Changes
Worker rate-limiting key & error handling
claim-db-worker/src/index.ts, create-db-worker/src/index.ts
Derive clientIP from the x-agent header, falling back to cf-connecting-ip; extract rayId from cf-ray/x-ray-id; when limited, return a structured 429 JSON error including ip and rayId.
Rate limiter configuration
claim-db-worker/wrangler.jsonc, create-db-worker/wrangler.jsonc
Lower unsafe.bindings[0].simple.limit from 100 to 10; period/name/type/namespace unchanged.
Rate limit test script
tests/test-rate-limits.sh
Add fourth arg agent_id (default "meow"); send x-agent, X-Agent, and User-Agent headers; update defaults for CREATE_DB_URL and CLAIM_DB_URL to remote URLs; capture per-request response bodies to /tmp/response_$i.json; print agent info; remove active sleep; update usage/examples.
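Since the script now captures each response body to /tmp/response_$i.json along with its status code, a summary step could tally allowed versus rate-limited requests; count_429 and the sample status codes below are illustrative helpers, not part of tests/test-rate-limits.sh itself:

```shell
#!/bin/sh
# Count allowed vs rate-limited requests from a list of HTTP status codes,
# as the test script could do after its request loop completes.
count_429() {
  allowed=0
  limited=0
  for code in "$@"; do
    if [ "$code" = "429" ]; then
      limited=$((limited + 1))
    else
      allowed=$((allowed + 1))
    fi
  done
  printf 'allowed=%d limited=%d\n' "$allowed" "$limited"
}

count_429 200 200 200 429 429   # with a 10 req/60s limit, later requests get 429
```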

Sequence Diagram(s)

sequenceDiagram
  participant C as Client
  participant W as Worker
  participant R as RateLimiter

  C->>W: HTTP request (includes x-agent / cf-connecting-ip, cf-ray/x-ray-id)
  W->>W: Extract clientIP and rayId
  W->>R: limit({ key: clientIP })
  alt Allowed
    R-->>W: ok
    W-->>C: proceed with normal response
  else Limited
    R-->>W: limited
    W-->>C: 429 JSON { error, message, code, ip, rayId }
  end



Preview CLIs & Workers are live!

Test the CLIs locally under tag pr41-fix-rate-limit-16883154200:

npx create-db@pr41
npx create-pg@pr41
npx create-postgres@pr41

Worker URLs
• Create-DB Worker:
• Claim-DB Worker:

These will live as long as this PR exists under tag pr41-fix-rate-limit-16883154200.


cloudflare-workers-and-pages bot commented Aug 11, 2025

Deploying with Cloudflare Workers

The latest updates on your project. Learn more about integrating Git with Workers.

Status: ✅ Deployment successful! (View logs)
Name: claim-db-worker
Latest Commit: 3b51143
Preview URL: Commit Preview URL / Branch Preview URL
Updated (UTC): Aug 12 2025, 07:44 AM


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4d9bb9d and fb9fea4.

📒 Files selected for processing (5)
  • claim-db-worker/src/index.ts (1 hunks)
  • claim-db-worker/wrangler.jsonc (1 hunks)
  • create-db-worker/src/index.ts (1 hunks)
  • create-db-worker/wrangler.jsonc (1 hunks)
  • tests/test-rate-limits.sh (4 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
create-db-worker/src/index.ts (2)
claim-db-worker/worker-configuration.d.ts (1)
  • env (6799-6799)
create-db-worker/worker-configuration.d.ts (1)
  • env (6794-6794)
🪛 Shellcheck (0.10.0)
tests/test-rate-limits.sh

[info] 44-44: Double quote to prevent globbing and word splitting.

(SC2086)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Workers Builds: create-db-worker
  • GitHub Check: Workers Builds: claim-db-worker
🔇 Additional comments (3)
create-db-worker/wrangler.jsonc (1)

30-32: Confirm Monitoring & SLO Updates for Reduced Rate Limit

The rate limit has been reduced from 100→10 req/min in both workers:

  • create-db-worker/wrangler.jsonc (lines 30–32)
  • claim-db-worker/wrangler.jsonc (lines 27–29)

Please ensure you:

  • Validate that associated SLOs and error budgets accommodate the new 10 req/min threshold
  • Update any dashboards or alerts (dev/staging/prod) referencing the old limit
  • (Optional) Add an inline runbook comment in each wrangler.jsonc documenting the rationale and expected behavior
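For reference, the reduced limiter settings land in each worker's wrangler.jsonc roughly as below. This is an illustrative fragment: the binding name matches the one used in the source, and limit/period come from this PR, but the surrounding structure (type, namespace_id) is assumed rather than copied from the diff:

```jsonc
{
  "unsafe": {
    "bindings": [
      {
        // Binding name as referenced from create-db-worker/src/index.ts
        "name": "CREATE_DB_RATE_LIMITER",
        "type": "ratelimit",
        "namespace_id": "1001",
        // Reduced in this PR from 100 to 10 requests per 60-second period
        "simple": { "limit": 10, "period": 60 }
      }
    ]
  }
}
```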
tests/test-rate-limits.sh (2)

17-19: Header redundancy and PII

  • x-Agent and x-agent are the same header (case-insensitive); sending both is redundant.
  • User-Agent and custom IDs may be logged server-side; ensure this is acceptable.
-echo "Agent Header: X-Agent: $AGENT_ID"
-echo "User-Agent: prisma-rate-limit-test/$AGENT_ID"
+echo "Agent Header: x-agent: $AGENT_ID"
+echo "User-Agent: prisma-rate-limit-test/$AGENT_ID"

Likely an incorrect or invalid review comment.


8-10: Default production URLs are intentional—no change needed
This script is a manual utility meant to validate your live rate-limit endpoints. Since you can already override CREATE_DB_URL and CLAIM_DB_URL via positional arguments, defaulting to the public URLs is expected behavior.

Likely an incorrect or invalid review comment.

Comment on lines 33 to 35
// Use client IP for consistent rate limiting across environments
const clientIP = request.headers.get('x-agent') || request.headers.get('cf-connecting-ip');
const { success } = await env.CLAIM_DB_RATE_LIMITER.limit({ key: clientIP! });

🛠️ Refactor suggestion

⚠️ Potential issue

Do not trust client header over CF IP; add fallbacks and avoid null keys

Using x-agent before cf-connecting-ip allows trivial rate-limit evasion by rotating x-agent. Also, clientIP can be null in some envs; the non-null assertion ! only silences TS, not runtime.

Refactor to prefer CF-provided IP, then standard proxy headers, then a dev/testing override, and finally a safe default.

-    // Use client IP for consistent rate limiting across environments
-    const clientIP = request.headers.get('x-agent') || request.headers.get('cf-connecting-ip');
-    const { success } = await env.CLAIM_DB_RATE_LIMITER.limit({ key: clientIP! });
+    // Use client IP for consistent rate limiting across environments (prefer CF header).
+    const clientIP =
+      request.headers.get('cf-connecting-ip') ||
+      request.headers.get('x-forwarded-for')?.split(',')[0]?.trim() ||
+      request.headers.get('x-real-ip') ||
+      // Allow explicit test override last
+      request.headers.get('x-agent') ||
+      'anonymous';
+    let success = true;
+    try {
+      ({ success } = await env.CLAIM_DB_RATE_LIMITER.limit({ key: clientIP }));
+    } catch (err) {
+      // Fail-open to avoid blocking in case the binding is unavailable; adjust if you prefer fail-closed.
+      console.error('RateLimiter error', err);
+      success = true;
+    }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// Use client IP for consistent rate limiting across environments
const clientIP = request.headers.get('x-agent') || request.headers.get('cf-connecting-ip');
const { success } = await env.CLAIM_DB_RATE_LIMITER.limit({ key: clientIP! });
// Use client IP for consistent rate limiting across environments (prefer CF header).
const clientIP =
request.headers.get('cf-connecting-ip') ||
request.headers.get('x-forwarded-for')?.split(',')[0]?.trim() ||
request.headers.get('x-real-ip') ||
// Allow explicit test override last
request.headers.get('x-agent') ||
'anonymous';
let success = true;
try {
({ success } = await env.CLAIM_DB_RATE_LIMITER.limit({ key: clientIP }));
} catch (err) {
// Fail-open to avoid blocking in case the binding is unavailable; adjust if you prefer fail-closed.
console.error('RateLimiter error', err);
success = true;
}
🤖 Prompt for AI Agents
In claim-db-worker/src/index.ts around lines 33 to 35, the code currently trusts
the x-agent header first and uses a non-null assertion for the rate-limit key
which can be null at runtime; instead, prefer the Cloudflare-provided IP
(cf-connecting-ip), then standard proxy headers like x-forwarded-for (taking the
left-most IP), then a controlled dev/testing override header (e.g.
x-dev-client-ip), and finally a safe default string (e.g. "unknown" or
"127.0.0.1"); remove the non-null assertion and build a validated key string
before calling env.CLAIM_DB_RATE_LIMITER.limit({ key }), ensuring you trim and
normalize the chosen header value so it cannot be empty and falls back to the
default.

Comment on lines +27 to 29
"limit": 10,
"period": 60,
},

💡 Verification agent

🧩 Analysis chain

CLAIM_DB limit also tightened to 10 req/min — align expectations

Ensure clients and tests reflect the new ceiling to avoid false alarms. Consider staggered rollout or per-env overrides if needed.


🏁 Script executed:

#!/bin/bash
# Show the wrangler.jsonc around the rate limit config
echo "---- claim-db-worker/wrangler.jsonc (lines 1-100) ----"
sed -n '1,100p' claim-db-worker/wrangler.jsonc || true

echo -e "\n---- All occurrences of 'limit' in JSON/JSONC files ----"
rg --type-add 'jsonc:*.jsonc' --type-add 'json:*.json' -tjson -tjsonc '"limit"' -C 3

echo -e "\n---- References to rate limiting in code/tests ----"
rg -n -e "claim" -e "db" -e "limit" || true
rg -n -e "rate" -e "limit" || true

echo -e "\n---- Test files mentioning 10 or 60 ----"
rg -n "10" --glob '*test*' || true
rg -n "60" --glob '*test*' || true

Length of output: 2323


CLAIM_DB rate limit lowered to 10 req/min—update clients & tests

We’ve tightened CLAIM_DB_RATE_LIMITER (limit: 10, period: 60). Any clients or tests assuming a higher ceiling will now hit this threshold. To avoid failures or false alarms:

• tests/test-rate-limits.sh: lower the default TEST_COUNT (currently 110) to ≤ 10 or make it configurable per-env
• Client integrations/docs that reference CLAIM_DB throughput > 10 req/min
• Consider per-environment overrides or a staggered rollout if some workloads need more headroom

🤖 Prompt for AI Agents
In claim-db-worker/wrangler.jsonc around lines 27 to 29, the CLAIM_DB rate
limiter was reduced to 10 requests per 60 seconds which will break tests and
clients that assume higher throughput; update tests/test-rate-limits.sh to lower
TEST_COUNT to 10 or make TEST_COUNT configurable via env vars and default to
≤10, update any client integrations and documentation that reference >10 req/min
to reflect the new limit, and add a per-environment override (env var) or note a
staged rollout plan for workloads that need higher headroom.


# Test Rate Limits Script
# Usage: ./tests/test-rate-limits.sh [test_count] [create_db_url] [claim_db_url]
# Usage: ./tests/test-rate-limits.sh [test_count] [create_db_url] [claim_db_url] [agent_id]

🧹 Nitpick (assertive)

Add strict mode and safer loop quoting

Improve robustness and satisfy shellcheck.

 # Test Rate Limits Script
-# Usage: ./tests/test-rate-limits.sh [test_count] [create_db_url] [claim_db_url] [agent_id]
+# Usage: ./tests/test-rate-limits.sh [test_count] [create_db_url] [claim_db_url] [agent_id]
+set -Eeuo pipefail

Also change the loop and seq usage:

-    for i in $(seq 1 $TEST_COUNT); do
+    for i in $(seq 1 "${TEST_COUNT}"); do
🤖 Prompt for AI Agents
In tests/test-rate-limits.sh around line 4, the script lacks strict shell
settings and uses unsafe loop/seq and unquoted variables; enable robust shell
behavior by adding at the top: set -euo pipefail and IFS=$'\n\t', then update
the loop to avoid unquoted seq and word-splitting (for example replace
constructs like for i in $(seq 1 $test_count) with for i in $(seq 1
"$test_count") or preferably a POSIX-safe while loop using a counter), and
ensure all variable expansions in the loop and commands are quoted (e.g.,
"$create_db_url" "$claim_db_url" "$agent_id") to satisfy shellcheck and prevent
word-splitting.
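The hardening suggested above can be sketched as a minimal strict-mode loop; run_requests and the echo body are placeholders for the script's curl calls:

```shell
#!/usr/bin/env bash
# Strict mode: -E propagates ERR traps, -e exits on error, -u rejects
# unset variables, -o pipefail surfaces failures inside pipelines.
set -Eeuo pipefail
IFS=$'\n\t'

run_requests() {
  # Quoted counter loop avoids the unquoted-seq word-splitting flagged
  # by shellcheck (SC2086).
  local count="$1" i
  for ((i = 1; i <= count; i++)); do
    echo "request $i"
  done
}

run_requests 3
```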

Comment on lines 38 to 46
# Make the request with unique agent headers and capture body + status code
response=$(curl -s \
-H "x-agent: $AGENT_ID" \
-H "x-Agent: $AGENT_ID" \
-H "User-Agent: prisma-rate-limit-test/$AGENT_ID" \
-w "%{http_code}" \
-o /tmp/response_$i.json \
"$endpoint" 2>/dev/null)


🛠️ Refactor suggestion

Quote variables (SC2086), remove duplicate header, and safe temp files

  • Quote expansions to avoid word splitting/globbing and satisfy shellcheck.
  • Remove duplicate x-Agent header.
  • Use mktemp and trap for cleanup.
-        # Make the request with unique agent headers and capture body + status code
-        response=$(curl -s \
-          -H "x-agent: $AGENT_ID" \
-          -H "x-Agent: $AGENT_ID" \
-          -H "User-Agent: prisma-rate-limit-test/$AGENT_ID" \
-          -w "%{http_code}" \
-          -o /tmp/response_$i.json \
-          "$endpoint" 2>/dev/null)
+        # Make the request with unique agent header and capture body + status code
+        tmp_dir="${TMP_DIR:-$(mktemp -d)}"
+        trap 'rm -rf "$tmp_dir"' EXIT
+        response="$(
+          curl -s \
+            -H "x-agent: ${AGENT_ID}" \
+            -H "User-Agent: prisma-rate-limit-test/${AGENT_ID}" \
+            -w "%{http_code}" \
+            -o "${tmp_dir}/response_${i}.json" \
+            "${endpoint}" 2>/dev/null
+        )"
🧰 Tools
🪛 Shellcheck (0.10.0)

[info] 44-44: Double quote to prevent globbing and word splitting.

(SC2086)

🤖 Prompt for AI Agents
In tests/test-rate-limits.sh around lines 38 to 46, the curl call uses unquoted
variable expansions, repeats the x-Agent header, and writes to a fixed temp
path; fix it by removing the duplicate header, quoting all variable expansions
(e.g. "$AGENT_ID" and "$endpoint") to prevent word-splitting/globbing, create a
unique temp file with mktemp (e.g. tmp=$(mktemp)) and write curl output to that
quoted path, and add a trap to remove the temp file on exit to ensure safe
cleanup.
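The mktemp-plus-trap pattern recommended above can be sketched as follows; fetch_to_temp is a hypothetical stand-in for the script's curl call, shown here writing a canned body so the example is self-contained:

```shell
#!/usr/bin/env bash
set -Eeuo pipefail

# Unique, collision-free temp directory instead of fixed /tmp paths.
tmp_dir="$(mktemp -d)"
# Clean up the directory on any exit, normal or error.
trap 'rm -rf "$tmp_dir"' EXIT

fetch_to_temp() {
  # In the real script this would be: curl -s -o "$1" "$endpoint"
  printf '{"status":"ok"}\n' > "$1"
}

fetch_to_temp "$tmp_dir/response_1.json"
cat "$tmp_dir/response_1.json"
```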


# Small delay between requests
sleep 0.1
#sleep 0.05

🧹 Nitpick (assertive)

Optional: throttle to reduce noisy bursts

Consider re-introducing a small sleep (e.g., 20–50ms) to avoid overwhelming remote endpoints during ad-hoc runs.

🤖 Prompt for AI Agents
In tests/test-rate-limits.sh around line 65, the commented-out small sleep
("#sleep 0.05") should be re-enabled to throttle bursts; uncomment it and use a
portable short delay such as sleep 0.02–0.05 (e.g., sleep 0.02 or sleep 0.05) to
reduce request bursts during ad-hoc runs, or if sub-second precision is required
in the environment, replace with a portable usleep/timeout helper that sleeps
for 20–50ms.

echo ""
echo "💡 To test with your actual deployed URLs, run:"
echo " ./tests/test-rate-limits.sh 110 https://create-db-temp.prisma.io https://create-db.prisma.io"
echo " ./tests/test-rate-limits.sh 110 https://create-db-temp.prisma.io https://create-db.prisma.io my-local-agent"

🧹 Nitpick (assertive)

Example invocation: consider demonstrating local defaults

Recommend showing a local/miniflare example first to prevent accidental production load.

🤖 Prompt for AI Agents
In tests/test-rate-limits.sh around line 89, the example invocation only shows
production endpoints which may cause accidental load; add a local/miniflare
first so users see a safe default. Update the echo line to print a local example
(e.g., using localhost or miniflare URL and a local agent name) before the
existing production example, ensuring both examples are clearly separated and
labeled (local first, then production).


Preview CLIs & Workers are live!

Test the CLIs locally under tag pr41-fix-rate-limit-16901773508:

npx create-db@pr41
npx create-pg@pr41
npx create-postgres@pr41

Worker URLs
• Create-DB Worker:
• Claim-DB Worker:

These will live as long as this PR exists under tag pr41-fix-rate-limit-16901773508.


Preview CLIs & Workers are live!

Test the CLIs locally under tag pr41-fix-rate-limit-16901787098:

npx create-db@pr41
npx create-pg@pr41
npx create-postgres@pr41

Worker URLs
• Create-DB Worker:
• Claim-DB Worker:

These will live as long as this PR exists under tag pr41-fix-rate-limit-16901787098.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (2)
claim-db-worker/src/index.ts (1)

33-37: Do not trust client-controlled headers for rate-limit key; prefer CF IP, add safe fallback, and handle limiter errors

Prioritize cf-connecting-ip, add standard fallbacks, remove non-null assertion, and wrap the limiter call to avoid hard failures when the binding is unavailable.

-    // Use client IP for consistent rate limiting across environments
-    const clientIP = request.headers.get('x-agent') || request.headers.get('cf-connecting-ip');
-    const rayId = request.headers.get('cf-ray') || request.headers.get('x-ray-id') || 'unknown-ray-id';
-
-    const { success } = await env.CLAIM_DB_RATE_LIMITER.limit({ key: clientIP! });
+    // Use client IP for consistent rate limiting across environments (prefer CF header).
+    const clientIP =
+      request.headers.get('cf-connecting-ip') ||
+      request.headers.get('x-forwarded-for')?.split(',')[0]?.trim() ||
+      request.headers.get('x-real-ip') ||
+      // Allow explicit test override last
+      request.headers.get('x-agent') ||
+      'anonymous';
+    const rayId = request.headers.get('cf-ray') || request.headers.get('x-ray-id') || 'unknown-ray-id';
+
+    let success = true;
+    try {
+      ({ success } = await env.CLAIM_DB_RATE_LIMITER.limit({ key: clientIP }));
+    } catch (err) {
+      console.error('[ratelimit] binding error', err);
+      success = true; // fail-open; switch to false for fail-closed behavior
+    }
create-db-worker/src/index.ts (1)

15-22: Harden rate-limit key selection; avoid null keys; don’t prioritize client-controlled header; handle limiter failures

Prefer cf-connecting-ip, add standard fallbacks, remove the non-null assertion, avoid logging raw IPs, and wrap the limiter call to handle binding errors gracefully.

-    // Use client IP for consistent rate limiting across environments
-    const clientIP = request.headers.get('x-agent') || request.headers.get('cf-connecting-ip');
-    const rayId = request.headers.get('cf-ray') || request.headers.get('x-ray-id') || 'unknown-ray-id';
-
-    console.log(`Client IP: ${clientIP} - Request URL: ${request.url}`);
-
-    const { success } = await env.CREATE_DB_RATE_LIMITER.limit({ key: clientIP! });
+    // Use client IP for consistent rate limiting across environments (prefer CF header).
+    const clientIP =
+      request.headers.get('cf-connecting-ip') ||
+      request.headers.get('x-forwarded-for')?.split(',')[0]?.trim() ||
+      request.headers.get('x-real-ip') ||
+      // Allow explicit test override last
+      request.headers.get('x-agent') ||
+      'anonymous';
+    const rayId = request.headers.get('cf-ray') || request.headers.get('x-ray-id') || 'unknown-ray-id';
+
+    // Avoid logging raw IPs in production.
+    // console.log(`[ratelimit] key present: ${Boolean(clientIP)} - Request URL: ${request.url}`);
+
+    let success = true;
+    try {
+      ({ success } = await env.CREATE_DB_RATE_LIMITER.limit({ key: clientIP }));
+    } catch (err) {
+      console.error('[ratelimit] binding error', err);
+      success = true; // fail-open; switch to false for fail-closed behavior
+    }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fb9fea4 and 200574e.

📒 Files selected for processing (2)
  • claim-db-worker/src/index.ts (1 hunks)
  • create-db-worker/src/index.ts (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
create-db-worker/src/index.ts (2)
claim-db-worker/worker-configuration.d.ts (1)
  • env (6799-6799)
create-db-worker/worker-configuration.d.ts (1)
  • env (6794-6794)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Workers Builds: claim-db-worker
  • GitHub Check: Workers Builds: create-db-worker
🔇 Additional comments (2)
claim-db-worker/src/index.ts (1)

35-35: Ray ID extraction LGTM

Using cf-ray with a fallback is sensible for correlating logs and client responses.

create-db-worker/src/index.ts (1)

49-49: LGTM: clarifies the test endpoint message

The updated message is concise and accurate.

Comment on lines +40 to +53
console.log(`Rate limit exceeded for IP: ${clientIP}. Ray ID: ${rayId}. Request blocked to prevent abuse.`);
return new Response(
JSON.stringify({
error: 'Too Many Requests',
message: 'You have exceeded the allowed number of requests. Please wait before trying again.',
code: 429,
ip: clientIP,
rayId,
}),
{
status: 429,
headers: { 'Content-Type': 'application/json' },
},
);

🛠️ Refactor suggestion

Minimize PII: avoid logging/echoing raw IP; return only non-sensitive identifiers

Logging IPs and returning them in responses can be considered PII. Prefer logging rayId only and omit the ip field (or rename to a generic key if truly needed).

-      console.log(`Rate limit exceeded for IP: ${clientIP}. Ray ID: ${rayId}. Request blocked to prevent abuse.`);
+      console.log(`[429] Rate limit exceeded. Ray ID: ${rayId}. Request blocked to prevent abuse.`);
       return new Response(
         JSON.stringify({
           error: 'Too Many Requests',
           message: 'You have exceeded the allowed number of requests. Please wait before trying again.',
           code: 429,
-          ip: clientIP,
           rayId,
         }),
         {
           status: 429,
           headers: { 'Content-Type': 'application/json' },
         },
       );
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
console.log(`Rate limit exceeded for IP: ${clientIP}. Ray ID: ${rayId}. Request blocked to prevent abuse.`);
return new Response(
JSON.stringify({
error: 'Too Many Requests',
message: 'You have exceeded the allowed number of requests. Please wait before trying again.',
code: 429,
ip: clientIP,
rayId,
}),
{
status: 429,
headers: { 'Content-Type': 'application/json' },
},
);
console.log(`[429] Rate limit exceeded. Ray ID: ${rayId}. Request blocked to prevent abuse.`);
return new Response(
JSON.stringify({
error: 'Too Many Requests',
message: 'You have exceeded the allowed number of requests. Please wait before trying again.',
code: 429,
rayId,
}),
{
status: 429,
headers: { 'Content-Type': 'application/json' },
},
);
🤖 Prompt for AI Agents
In claim-db-worker/src/index.ts around lines 40 to 53, the handler logs and
returns the client's raw IP and includes it in the JSON response (PII). Remove
the raw IP from logs and response: change console.log to only mention the rayId
(and any non-PII token), and remove the ip property from the returned JSON (or
replace it with a non-identifying generic key like clientTag if absolutely
required). Ensure headers and status remain unchanged and do not expose the IP
elsewhere in this block.

Comment on lines 24 to 37
console.log(`Rate limit exceeded for IP: ${clientIP}. Ray ID: ${rayId}. Request blocked to prevent abuse.`);
return new Response(
JSON.stringify({
error: 'Too Many Requests',
message: 'You have exceeded the allowed number of requests. Please wait before trying again.',
code: 429,
ip: clientIP,
rayId,
}),
{
status: 429,
headers: { 'Content-Type': 'application/json' },
},
);

🛠️ Refactor suggestion

Reduce PII exposure in 429 path

Avoid logging IPs and do not echo them back; rely on rayId for correlation.

-      console.log(`Rate limit exceeded for IP: ${clientIP}. Ray ID: ${rayId}. Request blocked to prevent abuse.`);
+      console.log(`[429] Rate limit exceeded. Ray ID: ${rayId}. Request blocked to prevent abuse.`);
       return new Response(
         JSON.stringify({
           error: 'Too Many Requests',
           message: 'You have exceeded the allowed number of requests. Please wait before trying again.',
           code: 429,
-          ip: clientIP,
           rayId,
         }),
         {
           status: 429,
           headers: { 'Content-Type': 'application/json' },
         },
       );
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
console.log(`Rate limit exceeded for IP: ${clientIP}. Ray ID: ${rayId}. Request blocked to prevent abuse.`);
return new Response(
JSON.stringify({
error: 'Too Many Requests',
message: 'You have exceeded the allowed number of requests. Please wait before trying again.',
code: 429,
ip: clientIP,
rayId,
}),
{
status: 429,
headers: { 'Content-Type': 'application/json' },
},
);
console.log(`[429] Rate limit exceeded. Ray ID: ${rayId}. Request blocked to prevent abuse.`);
return new Response(
JSON.stringify({
error: 'Too Many Requests',
message: 'You have exceeded the allowed number of requests. Please wait before trying again.',
code: 429,
rayId,
}),
{
status: 429,
headers: { 'Content-Type': 'application/json' },
},
);
🤖 Prompt for AI Agents
In create-db-worker/src/index.ts around lines 24 to 37, the 429 response path
currently logs and returns the client's IP and echoes it in the JSON body;
remove PII exposure by eliminating any use of clientIP in logs and the response
body, keep and log only the rayId for correlation, and update the JSON payload
to exclude the ip field while still returning status 429 and Content-Type
application/json; ensure any console logs reference only non-PII context (e.g.,
"Rate limit exceeded. Ray ID: {rayId}") and that clientIP is not stored or
returned.


Preview CLIs & Workers are live!

Test the CLIs locally under tag pr41-fix-rate-limit-16902387945:

npx create-db@pr41
npx create-pg@pr41
npx create-postgres@pr41

Worker URLs
• Create-DB Worker:
• Claim-DB Worker:

These will live as long as this PR exists under tag pr41-fix-rate-limit-16902387945.

@ankur-arch ankur-arch closed this Sep 10, 2025

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant